3 research outputs found
Meta Reinforcement Learning with Latent Variable Gaussian Processes
Learning from small data sets is critical in many practical applications
where data collection is time consuming or expensive, e.g., robotics, animal
experiments or drug design. Meta learning is one way to increase the data
efficiency of learning algorithms by generalizing learned concepts from a set
of training tasks to unseen, but related, tasks. Often, this relationship
between tasks is hard-coded or otherwise relies on human expertise. In
this paper, we frame meta learning as a hierarchical latent variable model and
infer the relationship between tasks automatically from data. We apply our
framework in a model-based reinforcement learning setting and show that our
meta-learning model effectively generalizes to novel tasks by identifying how
new tasks relate to prior ones from minimal data. This results in up to a 60%
reduction in the average interaction time needed to solve tasks compared to
strong baselines.
Comment: 11 pages, 7 figures
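The core mechanism can be illustrated with a toy sketch. This is our own simplification, not the paper's implementation (which embeds the idea in model-based RL): a single GP is fit over inputs augmented with a per-task latent variable, and a new task's latent is inferred from minimal data by maximising the GP marginal likelihood over candidate latents.

```python
import numpy as np

# Toy sketch (assumed simplification): tasks share one function f(x, h),
# where h is a per-task latent variable appended to the GP input. Relating
# a new task to prior ones then amounts to inferring its latent h.

def rbf(A, B, ls=1.0):
    # Squared-exponential kernel over the augmented input (x, h).
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

def log_marginal(X, y, noise=0.1):
    # GP log marginal likelihood via Cholesky factorisation.
    K = rbf(X, X) + noise**2 * np.eye(len(X))
    L = np.linalg.cholesky(K)
    a = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return -0.5 * y @ a - np.log(np.diag(L)).sum()

# Two training tasks: f(x) = sin(x + h) with known latent shifts h.
rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, 40)
tasks = {0.0: np.sin(x + 0.0), 1.5: np.sin(x + 1.5)}

# A new task with only 5 observations; grid-search its latent h.
x_new = rng.uniform(-3, 3, 5)
y_new = np.sin(x_new + 1.4)
grid = np.linspace(-2, 2, 81)
scores = []
for h in grid:
    # Stack all tasks, each with its latent appended as an extra input dim.
    X = np.concatenate(
        [np.column_stack([x, np.full_like(x, ht)]) for ht in tasks]
        + [np.column_stack([x_new, np.full_like(x_new, h)])]
    )
    y = np.concatenate(list(tasks.values()) + [y_new])
    scores.append(log_marginal(X, y))
h_hat = grid[int(np.argmax(scores))]
print(f"inferred task latent: {h_hat:.2f}")
```

Grid search stands in for the variational inference a real implementation would use; the point is only that the task relationship is inferred from data rather than hard-coded.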
Probabilistic Active Meta-Learning
Data-efficient learning algorithms are essential in many practical
applications where data collection is expensive, e.g., in robotics due to the
wear and tear. To address this problem, meta-learning algorithms use prior
experience about tasks to learn new, related tasks efficiently. Typically, a
set of training tasks is assumed given or randomly chosen. However, this
setting does not take into account the sequential nature that naturally arises
when training a model from scratch in real life: how do we collect a set of
training tasks in a data-efficient manner? In this work, we introduce task
selection based on prior experience into a meta-learning algorithm by
conceptualizing the learner and the active meta-learning setting using a
probabilistic latent variable model. We provide empirical evidence that our
approach improves data efficiency compared to strong baselines in
simulated robotic experiments.
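A hypothetical sketch of the task-selection idea (our simplification; the task-descriptor space, kernel, and scoring rule are assumptions, not the paper's model): candidate next tasks are scored by the model's predictive uncertainty over task descriptors, and the most uncertain one is collected next.

```python
import numpy as np

# Assumed simplification: tasks live in a 1-D descriptor space, and a GP
# posterior over that space scores how informative each candidate task is.

def rbf(A, B, ls=1.0):
    d2 = (A[:, None] - B[None, :]) ** 2
    return np.exp(-0.5 * d2 / ls**2)

def posterior_var(train, cand, noise=0.1):
    # Diagonal of the GP posterior covariance at the candidate descriptors.
    K = rbf(train, train) + noise**2 * np.eye(len(train))
    Ks = rbf(cand, train)
    return 1.0 - np.einsum("ij,jk,ik->i", Ks, np.linalg.inv(K), Ks)

seen = np.array([0.0, 0.2, 0.4])        # descriptors of tasks already trained on
cands = np.array([0.3, 1.0, 2.5, 5.0])  # candidate next tasks
var = posterior_var(seen, cands)
next_task = cands[np.argmax(var)]       # pick the least-explored task
print(next_task)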
Learning Contact Dynamics using Physically Structured Neural Networks
Learning physically structured representations of dynamical systems that
include contact between different objects is an important problem for
learning-based approaches in robotics. Black-box neural networks can learn to
approximately represent discontinuous dynamics, but they typically require
large quantities of data and often suffer from pathological behaviour when
forecasting for longer time horizons. In this work, we use connections between
deep neural networks and differential equations to design a family of deep
network architectures for representing contact dynamics between objects. We
show that these networks can learn discontinuous contact events in a
data-efficient manner from noisy observations in settings that are
traditionally difficult for black-box approaches and recent physics-inspired
neural networks. Our results indicate that an idealised form of touch feedback
-- which is heavily relied upon by biological systems -- is a key component of
making this learning problem tractable. Together with the inductive biases
introduced through the network architectures, our techniques enable accurate
learning of contact dynamics from observations.
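The touch-feedback argument can be illustrated with a toy hybrid simulator (our own illustration, not the paper's architecture): smooth free-flight dynamics are integrated with an ordinary step, and a discrete impact law is applied only when a contact signal fires, so no single smooth function has to approximate the discontinuity.

```python
# Toy bouncing-ball sketch (assumed example): the contact signal gates a
# discrete impact law, separating smooth and discontinuous dynamics the way
# an idealised touch-feedback channel would.

def step(state, dt=0.01, g=9.81, restitution=0.8):
    h, v = state
    h_new, v_new = h + v * dt, v - g * dt         # smooth free-flight dynamics
    if h_new <= 0.0:                              # touch feedback: contact event
        h_new, v_new = 0.0, -restitution * v_new  # discrete impact law
    return h_new, v_new

state = (1.0, 0.0)   # drop from 1 m at rest
for _ in range(2000):  # simulate 20 s
    state = step(state)
print(state)
```

Because the discontinuity is localised to the gated branch, long-horizon rollouts stay physically plausible (the ball never penetrates the floor and its energy decays), which is the behaviour a structured architecture can preserve while a black-box network often cannot.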